
Configure Sample Applications

This section provides a flexible workflow for running and customizing advanced multimedia and AI sample applications using the Qualcomm® Intelligent Multimedia Product (QIMP) SDK on Ubuntu. Developers can define input/output sources, runtime targets, and model precision using JSON configuration files—enabling seamless evaluation across CPU, GPU, and DSP. With support for frameworks like TFLite, QNN, and SNPE, and integration with AI Hub, this setup is ideal for building and optimizing edge AI pipelines tailored to specific use cases.

Running advanced multimedia and AI sample applications using the QIMP SDK on Ubuntu enables developers to:

  • Prototype and validate AI workloads across heterogeneous compute targets (CPU, GPU, DSP), helping teams choose the most efficient runtime for their use case.
  • Customize application behavior using JSON-based configuration, allowing precise control over input/output sources, model types, and runtime parameters.
  • Accelerate development and deployment by leveraging pre-integrated models from AI Hub and supported frameworks like TFLite, QNN, and SNPE.
  • Benchmark performance and optimize resource usage, which is critical for embedded systems and edge devices where compute and power budgets are constrained.
  • Ensure compatibility and reproducibility across Qualcomm platforms by using standardized scripts and directory structures for models, labels, and media assets.

Prerequisites

  • Ubuntu OS flashed to the device
  • Terminal access with appropriate permissions
  • Basic familiarity with JSON configuration files and runtime environment variables
  • Access to AI Hub for model selection and export (Create AI Hub account)
  • If you haven't previously installed the PPA packages, run the following commands to install them:
    git clone -b ubuntu_setup --single-branch https://github.com/rubikpi-ai/rubikpi-script.git 
    cd rubikpi-script
    ./install_ppa_pkgs.sh

Use the steps below to configure the script and run the model.

1️⃣ Download and Run the Script

This script automatically fetches all required packages for running the sample applications, including:

  • Models
  • Labels
  • Media files
cd /home/ubuntu 
curl -L -O https://raw.githubusercontent.com/quic/sample-apps-for-qualcomm-linux/refs/heads/main/download_artifacts.sh
sudo chmod +x download_artifacts.sh
sudo ./download_artifacts.sh -v GA1.5-rel -c QCS6490

Explanation

  • Use the -v flag to define the version you want to work with (e.g., GA1.5-rel).
  • Use the -c flag to define the chipset your device is using (e.g., QCS6490).

2️⃣ Verify Model/Label/Media Files
Before launching any sample applications, make sure the required files are in place.

Check the following directories:

  • Model files: /etc/models/
  • Label files: /etc/labels/
  • Media files: /etc/media/
note

These files are essential for AI sample applications to function correctly. If they’re missing, re-run the artifact download script.
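A quick way to confirm that all three directories are present and populated is a short shell loop. This is a sketch (the helper function name is ours, not part of the SDK); it only reads the directories listed above:

```shell
# Report whether each required artifact directory exists and is non-empty.
check_dir() {
    if [ -d "$1" ] && [ -n "$(ls -A "$1" 2>/dev/null)" ]; then
        echo "OK: $1"
    else
        echo "MISSING or EMPTY: $1 (re-run download_artifacts.sh)"
    fi
}

for d in /etc/models /etc/labels /etc/media; do
    check_dir "$d"
done
```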

3️⃣ Update the JSON Config File
To run a sample application with a specific functionality, you'll need a properly configured JSON file.

What to Do

  • Update the required JSON config file based on your model and config requirements.
  • Edit the relevant file (e.g., /etc/configs/config_classification.json) to match your use case:

Configuration Parameters

Details

Update your JSON config file with the following key parameters:

  • Input Source
    • Camera
    • File (Filesrc)
    • RTSP Stream
  • Output Source
    • Waylandsink
    • Filesink
    • RTSP Stream
  • Runtime Options
    • CPU
    • GPU
    • DSP
  • Precision
    • INT8 / INT16
    • W8A8 / W8A16
    • FP32
  • Model Type
    • Select from available models in AI Hub
  • Labels
    • Select the correct labels file
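Putting these parameters together, a config file might look like the following sketch. The exact field names and schema differ per sample application, so treat the keys below as illustrative placeholders rather than the authoritative schema; always start from the file shipped under /etc/configs/ and edit only the values you need:

```json
{
  "input": "camera",
  "output": "waylandsink",
  "runtime": "dsp",
  "precision": "w8a16",
  "model": "/etc/models/classification.tflite",
  "labels": "/etc/labels/classification.labels"
}
```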

Sample Application Configuration Matrix

| Sample App Name | Details | AI Hub Model Type | Runtime | Script to Use |
| --- | --- | --- | --- | --- |
| gst-ai-classification | Image classification | MobileNet-v2, ResNet101, GoogLeNet, MobileNet-v3-Large, ResNet18, ResNeXt50, ResNeXt101, SqueezeNet, WideResNet50, Shufflenet | CPU, GPU, DSP | Update JSON |
| gst-ai-object-detection | Object detection | Yolox, Yolov7, Yolov8-Detection (manual export) | CPU, GPU, DSP | Export model from AI Hub; update script for Yolox/Yolov7; update JSON |
| gst-ai-pose-detection | Pose detection | hrnet_pose | CPU, GPU, DSP | TFLite works by default; update script for precision/runtime; update JSON |
| gst-ai-segmentation | Image segmentation | FFNet-40S, FFNet-54S, FFNet-78S | CPU, GPU, DSP | Update JSON |
| gst-ai-superresolution | Video super-resolution | quicksrnetsmall, QuickSRNetMedium, QuickSRNetLarge, XLSR | CPU, GPU, DSP | Update JSON |
| gst-ai-multistream-batch-inference | Multistream batch inference | YoloV8-Detection (batch 4), DeeplabV3 (batch 4) | CPU, GPU, DSP | Export model from AI Hub; update script; update JSON |
| gst-ai-face-detection | Face detection | face_det_lite | CPU, GPU, DSP | Update JSON |
| gst-ai-face-recognition | Face recognition | face_det_lite, face_attrib_net, facemap_3dmm | CPU, GPU, DSP | Face registration required; otherwise output is 'unknown face recognized' |
| gst-ai-metadata-parser-example | Metadata parsing | Yolov8-Detection | CPU, GPU, DSP | Export model from AI Hub |
| gst-ai-usb-camera-app | AI USB camera | Yolov8-Detection | CPU, GPU, DSP | Export model from AI Hub |
| gst-ai-parallel-inference | Parallel inferencing | Yolov8-Detection, Deeplab, Hrnet, Inceptionv3 | CPU, GPU, DSP | Export model from AI Hub; update JSON for other models |
| gst-ai-daisychain-detection-classification | Daisy chain detection and classification | Inceptionv3 + YoloV8 | CPU, GPU, DSP | Export model from AI Hub; update JSON for other models |
| gst-ai-audio-classification | Audio classification | Inceptionv3 + YoloV8 | CPU, GPU, DSP | Export model from AI Hub; update JSON for other models |
| gst-ai-smartcodec-example | AI smart codec | Inceptionv3 + YoloV8 | CPU, GPU, DSP | Export model from AI Hub; update JSON for other models |

Use the SSH/SBC terminal to launch your sample application.

note

If the terminal session is running as root, set the following environment variable (not required for the ubuntu user):

export XDG_RUNTIME_DIR=/run/user/$(id -u ubuntu)

Example

For the AI Classification sample application, open the /etc/configs/config_classification.json configuration file and update the default labels file.
Change:
"labels": "/etc/labels/classification.labels"
to:
"labels": "/etc/labels/imagenet_labels.txt"

Run the AI classification sample application.

gst-ai-classification

To display the available help options, run the following command in the SSH shell:

gst-ai-classification -h

To stop the use case, press CTRL + C.
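For unattended runs (e.g., quick benchmarking), the same CTRL + C interrupt can be delivered automatically after a fixed duration using `timeout` from GNU coreutils, which ships with Ubuntu. The wrapper function below is a sketch, not part of the SDK:

```shell
# run_bounded SECONDS COMMAND [ARGS...]: run a command, then send SIGINT
# (the same signal as CTRL + C) once the time limit expires.
run_bounded() {
    secs="$1"; shift
    timeout --signal=INT "$secs" "$@"
}

# Example: stop the classification demo after 30 seconds.
#   run_bounded 30 gst-ai-classification
```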

Reference Docs

To further explore sample applications, see the Qualcomm Intelligent Multimedia SDK (IM SDK) Reference Guide.